A Free Line Search Steepest Descent Method for Solving Unconstrained Optimization Problems

Author

  • Narges Bidabadi, Assistant Professor, Department of Mathematical Sciences, Yazd University, Yazd, Iran
Abstract:

In this paper, we solve unconstrained optimization problems using a line-search-free steepest descent method. First, we propose a double-parameter scaled quasi-Newton formula for computing an approximation of the Hessian matrix. The approximation obtained from this formula is a positive definite matrix that satisfies the standard secant relation. We also show that the largest eigenvalue of this matrix is not greater than the number of variables of the problem. Then, using this double-parameter scaled quasi-Newton formula, we derive an explicit formula for the step length in the steepest descent method, so the method does not require approximate line-search procedures for computing the step length. Numerical results from a MATLAB implementation of the algorithm are reported for a set of optimization problems; they demonstrate the efficiency of the proposed method in comparison with existing methods.
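The abstract does not reproduce the double-parameter formula itself, so the following Python sketch is illustrative only: a scaled memoryless BFGS matrix stands in for the paper's scaled quasi-Newton approximation (it shares the two properties stated above: positive definiteness when s'y > 0 and the secant relation B s = y), and the explicit step length is the exact minimizer of the resulting quadratic model along the negative gradient. The function name and the scaling choice are our assumptions, not the paper's.

```python
import numpy as np

def line_search_free_sd(grad, x0, tol=1e-6, max_iter=1000):
    """Steepest descent with an explicit step length (no line search).

    Illustrative stand-in for the paper's method: B_k is a scaled
    memoryless BFGS matrix, positive definite whenever s'y > 0 and
    satisfying the secant relation B_k s = y.  The step length
    alpha_k = g'g / (g' B_k g) exactly minimizes the quadratic model
    q(alpha) = f_k - alpha g'g + 0.5 alpha^2 g' B_k g along d = -g.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    g = grad(x)
    B = np.eye(n)                          # initial Hessian approximation
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = (g @ g) / (g @ B @ g)      # explicit step length, no line search
        x_new = x - alpha * g              # steepest descent step
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-12:                  # curvature condition keeps B positive definite
            tau = (y @ y) / (s @ y)        # Oren-Luenberger-type scaling (our choice)
            Bs = tau * s                   # equals (tau * I) @ s
            B = (tau * np.eye(n)
                 - np.outer(Bs, Bs) / (s @ Bs)
                 + np.outer(y, y) / (s @ y))
        x, g = x_new, g_new
    return x

# Example: strictly convex quadratic f(x) = 0.5 x'Ax - b'x, grad = Ax - b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = line_search_free_sd(lambda x: A @ x - b, np.zeros(2))
```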


similar resources

A Modified Algorithm of Steepest Descent Method for Solving Unconstrained Nonlinear Optimization Problems

The steepest descent method (SDM), which can be traced back to Cauchy (1847), is the simplest gradient method for unconstrained optimization problems. The SDM is effective for well-posed and low-dimensional nonlinear optimization problems without constraints; however, for large-dimensional systems it converges very slowly. Therefore, a modified steepest descent method (MSDM) is developed to dea...

full text
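The truncated abstract above contrasts SDM's simplicity with its slow convergence on large, ill-conditioned problems; the baseline it modifies is worth stating concretely. On a strictly convex quadratic the exact line-search step has a closed form, and the iteration count grows with the condition number of the Hessian, which is exactly the slow behavior an MSDM targets. A minimal sketch (our illustration, not the MSDM itself):

```python
import numpy as np

def sdm_quadratic(A, b, x0, tol=1e-8, max_iter=100000):
    """Classical steepest descent (Cauchy, 1847) for f(x) = 0.5 x'Ax - b'x
    with A symmetric positive definite.  The exact line-search step is
    alpha = r'r / (r'Ar), where r = b - Ax is the negative gradient.
    Convergence degrades as the condition number of A grows."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        r = b - A @ x                      # residual = -gradient
        if np.linalg.norm(r) < tol:
            return x, k
        alpha = (r @ r) / (r @ A @ r)      # exact minimizer along r
        x = x + alpha * r
    return x, max_iter
```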

SVM-Optimization and Steepest-Descent Line Search

We consider (a subclass of) convex quadratic optimization problems and analyze decomposition algorithms that perform, at least approximately, steepest-descent exact line search. We show that these algorithms, when implemented properly, are within ε of optimality after O(log(1/ε)) iterations for strictly convex cost functions, and after O(1/ε) iterations in the general case. Our analysis is gener...

full text
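The core computation in such a decomposition scheme is the exact line search along a direction restricted to a working set of variables: for a quadratic objective the one-dimensional minimizer has a closed form. The sketch below is our construction of one such step, not the specific algorithm analyzed in the paper:

```python
import numpy as np

def decomposition_step(Q, c, x, working_set):
    """One steepest-descent exact line-search step of a decomposition
    method for min 0.5 x'Qx + c'x with Q symmetric positive definite.
    Only the coordinates in working_set move (the decomposition aspect);
    the step length t = -g'd / (d'Qd) is the exact minimizer along the
    restricted negative-gradient direction d."""
    g = Q @ x + c                          # full gradient
    d = np.zeros_like(x)
    d[working_set] = -g[working_set]       # restricted steepest descent direction
    curvature = d @ Q @ d
    if curvature <= 0.0:
        return x                           # degenerate direction; no progress
    t = -(g @ d) / curvature               # exact line search
    return x + t * d
```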

Steepest descent method for solving zero-one nonlinear programming problems

In this paper, we use the steepest descent method for solving zero-one nonlinear programming problems. Using a penalty function, we transform this problem into an unconstrained optimization problem, and then obtain the optimal solution of the original problem by the steepest descent method.

full text
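The abstract does not state which penalty is used; a common choice enforces x_i ∈ {0,1} through the identity x_i(1 − x_i) = 0, penalizing its square. The sketch below shows this standard transformation (the function names are ours, and the paper's penalty may differ):

```python
import numpy as np

def penalize_zero_one(f, grad_f, mu):
    """Transform  min f(x) subject to x in {0,1}^n  into an unconstrained
    problem via the penalty  mu * sum_i x_i^2 (1 - x_i)^2,  which vanishes
    exactly on binary vectors.  Returns the penalized objective and its
    gradient, ready to hand to a steepest descent solver."""
    def F(x):
        return f(x) + mu * np.sum((x * (1.0 - x)) ** 2)
    def gF(x):
        # d/dx_i [x_i^2 (1 - x_i)^2] = 2 x_i (1 - x_i)(1 - 2 x_i)
        return grad_f(x) + mu * 2.0 * x * (1.0 - x) * (1.0 - 2.0 * x)
    return F, gF
```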

A New Steepest Descent Differential Inclusion-Based Method for Solving General Nonsmooth Convex Optimization Problems

In this paper, we investigate a steepest descent neural network for solving general nonsmooth convex optimization problems. Convergence to the optimal solution set is proved analytically. We apply the method to several numerical tests, which confirm the theoretical results and the performance of the proposed neural network.

full text
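Such steepest descent neural networks are typically the differential inclusion dx/dt ∈ −∂f(x(t)); discretizing it with forward Euler gives the familiar subgradient method. The sketch below makes that assumption explicit (the paper analyzes the continuous-time dynamics; this discretization is ours):

```python
import numpy as np

def subgradient_flow(subgrad, x0, h=1e-3, steps=10000):
    """Forward-Euler discretization of the differential inclusion
    dx/dt in -∂f(x) for nonsmooth convex f.  subgrad(x) must return
    any element of the subdifferential ∂f(x).  With a fixed step h the
    iterates only reach an O(h) neighborhood of the minimizer;
    diminishing steps are required for exact convergence."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - h * subgrad(x)
    return x

# Example: f(x) = ||x||_1, for which sign(x) is a valid subgradient.
x_approx = subgradient_flow(np.sign, np.array([2.0, -3.0]))
```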

A derivative-free nonmonotone line search technique for unconstrained optimization

A tolerant derivative-free nonmonotone line search technique is proposed and analyzed. Several consecutive increases in the objective function, and also non-descent directions, are admitted for unconstrained minimization. To exemplify the power of this new line search, we describe a direct search algorithm in which the directions are chosen randomly. The convergence properties of this random metho...

full text
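The acceptance rule behind such a scheme can be sketched: a trial point may increase the objective, as long as it stays below the worst of the last M accepted function values plus a summable tolerance (the "tolerant" aspect), and no derivative information is used. The sketch below pairs such a test, in the spirit of the Grippo-Lampariello-Lucidi nonmonotone rule, with randomly chosen directions; it is our illustration, not the exact condition analyzed in the paper:

```python
import numpy as np

def random_direct_search(f, x0, M=10, eta0=1.0, max_iter=2000, seed=0):
    """Derivative-free direct search with a tolerant nonmonotone test:
    a trial point is accepted if f(trial) <= max of the last M accepted
    values plus eta_k, where the tolerances eta_k are summable so the
    admitted increases cannot accumulate."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    history = [f(x)]
    step = 1.0
    for k in range(max_iter):
        eta = eta0 / (k + 1) ** 2          # summable tolerance sequence
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)             # random unit direction
        trial = x + step * d
        if f(trial) <= max(history[-M:]) + eta:   # nonmonotone, tolerant test
            x = trial
            history.append(f(trial))
            step *= 1.1                    # mild expansion after success
        else:
            step *= 0.5                    # contract after failure
    return x
```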

On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems

It is shown that the steepest descent and Newton's methods for unconstrained nonconvex optimization under standard assumptions may both require a number of iterations and function evaluations arbitrarily close to O(ε⁻²) to drive the norm of the gradient below ε. This shows that the upper bound of O(ε⁻²) evaluations known for steepest descent is tight, and that Newton's method may be as slow a...

full text



Journal title

volume 6, issue 26

pages 159-166

publication date 2020-10-22
